400 research outputs found

    Connectionism, Learning and Linguistic Structure

    Institute for Communicating and Collaborative Systems
    This thesis presents a connectionist theory of how infinite languages may fit within finite minds. Arguments are presented against the distinction between linguistic competence and observable language performance. It is suggested that certain kinds of finite state automata, i.e. recurrent neural networks, are likely to have sufficient computational power, and the necessary generalization capability, to serve as models for the processing and acquisition of linguistic structure.
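    The core mechanism the thesis appeals to, a simple recurrent network whose previous hidden state is copied back in as context, can be made concrete with a short sketch. The toy grammar, network size, and training scheme below are illustrative assumptions and are not taken from the thesis itself.

# A minimal sketch (not from the thesis) of an Elman-style simple recurrent
# network (SRN) learning to predict the next symbol of a toy "language".
# The symbol inventory, sequence grammar, and hyperparameters are invented.
import numpy as np

rng = np.random.default_rng(0)

symbols = ["a", "b", "#"]                 # '#' marks the end of a sequence
idx = {s: i for i, s in enumerate(symbols)}

def make_sequence():
    """Toy sequences of the form b a+ b #."""
    n = int(rng.integers(1, 5))
    return ["b"] + ["a"] * n + ["b", "#"]

V, H = len(symbols), 10                   # vocabulary size, hidden units
W_xh = rng.normal(0, 0.5, (H, V))         # input   -> hidden
W_hh = rng.normal(0, 0.5, (H, H))         # context -> hidden (recurrence)
W_hy = rng.normal(0, 0.5, (V, H))         # hidden  -> output
lr = 0.1

def one_hot(s):
    v = np.zeros(V)
    v[idx[s]] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Elman-style training: the previous hidden state is treated as a fixed
# context input, so gradients are taken only through the current time step.
for _ in range(500):
    seq = make_sequence()
    h = np.zeros(H)
    for t in range(len(seq) - 1):
        x, target = one_hot(seq[t]), one_hot(seq[t + 1])
        h_prev = h
        h = np.tanh(W_xh @ x + W_hh @ h_prev)
        y = softmax(W_hy @ h)
        dy = y - target                    # cross-entropy gradient at output
        dh = (W_hy.T @ dy) * (1 - h ** 2)  # backprop through tanh
        W_hy -= lr * np.outer(dy, h)
        W_xh -= lr * np.outer(dh, x)
        W_hh -= lr * np.outer(dh, h_prev)

# A fixed, finite set of weights now encodes the sequential statistics of an
# open-ended set of strings: after an initial "b" the network should put most
# of its probability on "a".
h = np.tanh(W_xh @ one_hot("b") + W_hh @ np.zeros(H))
print(dict(zip(symbols, np.round(softmax(W_hy @ h), 2))))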

    The long road of statistical learning research: past, present and future

    Published 21 November 2016. http://rstb.royalsocietypublishing.org/content/372/1711/20160047
    This paper was supported by the Israel Science Foundation (grant no. 217/14, awarded to R.F.), by the National Institute of Child Health and Human Development (RO1 HD 067364, awarded to Ken Pugh and R.F., and PO1-HD 01994, awarded to Haskins Laboratories), and by the European Research Council (project ERC-ADG-692502, awarded to R.F.).

    Towards a theory of individual differences in statistical learning

    Published 21 November 2016. http://rstb.royalsocietypublishing.org/content/372/1711/20160059
    In recent years, statistical learning (SL) research has seen a growing interest in tracking individual performance in SL tasks, mainly as a predictor of linguistic abilities. We review studies from this line of research and outline three presuppositions underlying the experimental approach they employ: (i) that SL is a unified theoretical construct; (ii) that current SL tasks are interchangeable, and equally valid for assessing SL ability; and (iii) that performance in the standard forced-choice test in the task is a good proxy of SL ability. We argue that these three critical presuppositions are subject to a number of theoretical and empirical issues. First, SL shows patterns of modality- and informational-specificity, suggesting that SL cannot be treated as a unified construct. Second, different SL tasks may tap into separate sub-components of SL that are not necessarily interchangeable. Third, the commonly used forced-choice tests in most SL tasks are subject to inherent limitations and confounds. As a first step, we offer a methodological approach that explicitly spells out a potential set of different SL dimensions, allowing for better transparency in choosing a specific SL task as a predictor of a given linguistic outcome. We then offer possible methodological solutions for better tracking and measuring SL ability. Taken together, these discussions provide a novel theoretical and methodological approach for assessing individual differences in SL, with clear testable predictions. This article is part of the themed issue ‘New frontiers for statistical learning in the cognitive sciences’.
    This article was supported by the Israel Science Foundation (Grant No. 217/14, awarded to R.F.), by the National Institute of Child Health and Human Development (Grant Nos. RO1 HD 067364, awarded to Ken Pugh and R.F., and PO1-HD 01994, awarded to Haskins Laboratories), and by the ERC (project 692502, awarded to R.F.). L.B. is a research fellow of the Fyssen Foundation.
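    As a concrete anchor for presupposition (iii), the sketch below shows how the standard forced-choice test is conventionally scored: a per-participant proportion of correct choices, compared against the chance level with a binomial test. The trial count and participant data are invented for illustration; the article's point is precisely that this single score may be a weak proxy of SL ability.

# A minimal sketch of conventional two-alternative forced-choice scoring in
# statistical learning tasks. Participant IDs and trial counts are invented.
from math import comb

def p_above_chance(correct, trials, p_chance=0.5):
    """One-sided P(X >= correct) if the participant were simply guessing."""
    return sum(comb(trials, k) * p_chance ** k * (1 - p_chance) ** (trials - k)
               for k in range(correct, trials + 1))

# Hypothetical number of correct choices out of 32 forced-choice trials.
responses = {"P01": 24, "P02": 17, "P03": 29}

for pid, correct in responses.items():
    print(f"{pid}: {correct / 32:.2f} proportion correct, "
          f"p = {p_above_chance(correct, 32):.3f} vs. chance")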

    Is there such a thing as a ‘good statistical learner’?

    Available online 19 November 2021.
    A growing body of research investigates individual differences in the learning of statistical structure, tying them to variability in cognitive (dis)abilities. This approach views statistical learning (SL) as a general individual ability that underlies performance across a range of cognitive domains. But is there a general SL capacity that can sort individuals from ‘bad’ to ‘good’ statistical learners? Explicating the suppositions underlying this approach, we suggest that current evidence supporting it is meager. We outline an alternative perspective that considers the variability of statistical environments within different cognitive domains. Once we focus on learning that is tuned to the statistics of real-world sensory inputs, an alternative view of SL computations emerges with a radically different outlook for SL research.
    This article was supported by the European Research Council (ERC) Advanced Grant Project 692502-L2STAT and the Israel Science Foundation (ISF) Grant Project 705/20, awarded to R.F. L.B. received funding from the ERC Advanced Grant Project 833029-LEARNATTEND. N.S. received funding from the ISF, grant number 48/2

    On-Line Individual Differences in Statistical Learning Predict Language Processing

    Considerable individual differences in language ability exist among normally developing children and adults. Whereas past research has attributed such differences to variations in verbal working memory or experience with language, we test the hypothesis that individual differences in statistical learning may be associated with differential language performance. We employ a novel paradigm for studying statistical learning on-line, combining a serial-reaction time task with artificial grammar learning. This task offers insights into both the time course of, and individual differences in, statistical learning. Experiment 1 charts the micro-level trajectory for statistical learning of nonadjacent dependencies and provides an on-line index of individual differences therein. In Experiment 2, these differences are then shown to predict variations in participants’ on-line processing of long-distance dependencies involving center-embedded relative clauses. The findings suggest that individual differences in the ability to learn from experience through statistical learning may contribute to variations in linguistic performance.
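    As one illustration of what such an on-line index can look like (the exact computation here is an assumption, not necessarily the paper's measure), the sketch below derives a per-participant learning slope from the growing response-time advantage for predictable targets, those completing a nonadjacent dependency, over unpredictable control targets across blocks, using invented reaction times.

# A minimal sketch of an on-line learning index from a serial-reaction time
# task: the block-by-block growth of the RT advantage for predictable over
# unpredictable targets. All reaction times below are invented.
import numpy as np

rng = np.random.default_rng(1)
n_blocks = 6
blocks = np.arange(n_blocks)

# Hypothetical per-block mean RTs (ms) for one participant.
rt_predictable   = 520 - 12 * blocks + rng.normal(0, 5, n_blocks)
rt_unpredictable = 525 - 2 * blocks + rng.normal(0, 5, n_blocks)

advantage = rt_unpredictable - rt_predictable     # positive = learning

# One simple individual-difference index: the slope of the advantage over
# blocks, i.e. how quickly the prediction benefit emerges for this person.
slope = np.polyfit(blocks, advantage, 1)[0]
print("RT advantage per block (ms):", np.round(advantage, 1))
print(f"learning slope: {slope:.1f} ms/block")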

    How arbitrary is language?

    It is a long-established convention that the relationship between the sounds and meanings of words is essentially arbitrary: typically, the sound of a word gives no hint of its meaning. However, there are numerous reported instances of systematic sound-meaning mappings in language, and this systematicity has been claimed to be important for early language development. In a large-scale corpus analysis of English, we show that sound-meaning mappings are more systematic than would be expected by chance. Furthermore, this systematicity is more pronounced for words involved in the early stages of language acquisition and reduces in later vocabulary development. We propose that the vocabulary is structured to enable systematicity in early language learning to promote language acquisition, while also incorporating arbitrariness for later language in order to facilitate communicative expressivity and efficiency.
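    The corpus analysis rests on a statistical idea that can be sketched in miniature: correlate pairwise distances between word forms with pairwise distances between word meanings, then compare the observed correlation against a null distribution obtained by randomly reassigning forms to meanings. The tiny lexicon and the stand-in "semantic" vectors below are invented; the published analysis uses phonological representations and corpus-derived semantic vectors at a far larger scale.

# A minimal sketch of a permutation test for sound-meaning systematicity.
# The lexicon and the 3-d "semantic" vectors are invented for illustration.
import itertools
import numpy as np

rng = np.random.default_rng(0)

forms = ["kat", "dag", "fish", "run", "walk", "big"]
meanings = rng.normal(size=(len(forms), 3))   # stand-in semantic vectors

def edit_distance(a, b):
    """Levenshtein distance as a crude form-distance measure."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a), len(b)]

pairs = list(itertools.combinations(range(len(forms)), 2))
sound_dist = np.array([edit_distance(forms[i], forms[j]) for i, j in pairs])

def meaning_dist(m):
    return np.array([np.linalg.norm(m[i] - m[j]) for i, j in pairs])

observed = np.corrcoef(sound_dist, meaning_dist(meanings))[0, 1]

# Null distribution: shuffle which meaning is paired with which form.
null = np.array([
    np.corrcoef(sound_dist,
                meaning_dist(meanings[rng.permutation(len(forms))]))[0, 1]
    for _ in range(2000)
])
print(f"observed r = {observed:.3f}, permutation p = {(null >= observed).mean():.3f}")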

    Cognitive Constraints Built into Formal Grammars: Implications for Language Evolution

    We study the validity of the cognitive independence assumption using an ensemble of artificial syntactic structures from various classes of dependency grammars. Our findings show that memory limitations have permeated current linguistic conceptions of grammar, suggesting that it may not be possible to adequately capture our unbounded capacity for language without incorporating cognitive constraints into the grammar formalism.
    Funding: Ministerio de Economía y Competitividad, TIN2017-85160-C2-1-R; Ministerio de Economía y Competitividad, TIN2017-89244-R; Agencia de Gestión de Ayudas Universitarias y de Investigación, 2017SGR-856; Xunta de Galicia, ED431B 2017/0

    Memory limitations are hidden in grammar

    The ability to produce and understand an unlimited number of different sentences is a hallmark of human language. Linguists have sought to define the essence of this generative capacity using formal grammars that describe the syntactic dependencies between constituents, independent of the computational limitations of the human brain. Here, we evaluate this independence assumption by sampling sentences uniformly from the space of possible syntactic structures. We find that the average dependency distance between syntactically related words, a proxy for memory limitations, is less than expected by chance in a collection of state-of-the-art classes of dependency grammars. Our findings indicate that memory limitations have permeated grammatical descriptions, suggesting that it may be impossible to build a parsimonious theory of human linguistic productivity independent of non-linguistic cognitive constraints.
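    The key quantity, the average dependency distance, is easy to make concrete. The sketch below computes it for a single invented sentence and compares it with random linear arrangements of the same tree; this random-reordering baseline is a simplification of the paper's uniform sampling over the space of possible syntactic structures.

# A minimal sketch of mean dependency distance for one invented sentence,
# compared against random linear orderings of the same dependency tree.
import numpy as np

rng = np.random.default_rng(0)

# "the dog that I saw barked": heads as 0-based indices, -1 marks the root.
words = ["the", "dog", "that", "I", "saw", "barked"]
heads = [1, 5, 4, 4, 1, -1]

def mean_dep_distance(order):
    """Mean |position(word) - position(head)| under a given linear order."""
    pos = {w: p for p, w in enumerate(order)}
    return np.mean([abs(pos[i] - pos[h]) for i, h in enumerate(heads) if h != -1])

observed = mean_dep_distance(range(len(words)))

# Baseline: shuffle word positions while keeping the same dependency tree.
baseline = np.array([mean_dep_distance(rng.permutation(len(words)))
                     for _ in range(5000)])

print(f"observed mean dependency distance: {observed:.2f}")
print(f"mean over random orderings       : {baseline.mean():.2f}")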
